Susceptibility tensor imaging (STI) is an emerging magnetic resonance imaging technique that characterizes anisotropic tissue magnetic susceptibility with a second-order tensor model. STI has the potential to provide information for the reconstruction of white matter fiber pathways and the detection of myelin changes in the brain at mm resolution, which would be of great value for understanding brain structure and function in health and disease. However, the in vivo application of STI is hindered by its cumbersome and time-consuming acquisition requirement of measuring susceptibility-induced MR phase changes at multiple (usually more than six) head orientations. This complexity is aggravated by the limitation on head rotation angles due to the physical constraints of the head coil. As a result, STI has not yet been widely applied in in vivo studies. In this work, we address these issues by proposing an image reconstruction algorithm for STI that leverages data-driven priors. Our method, called DeepSTI, learns the data prior implicitly via a deep neural network that approximates the proximal operator of a regularizer function for STI. The dipole inversion problem is then solved iteratively using the learned proximal network. Experimental results using simulation and in vivo human data demonstrate great improvement over state-of-the-art algorithms in terms of the reconstructed tensor image, principal eigenvector maps, and tractography results, while allowing tensor reconstruction from MR phase measured at far fewer than six different orientations. Notably, our method achieves promising reconstruction results with only one orientation in human in vivo data, and we demonstrate a potential application of this technique for estimating lesion susceptibility anisotropy in patients with multiple sclerosis.
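The iterative dipole inversion described above follows the standard proximal-gradient pattern, with the proximal operator supplied by a trained network. As a hedged illustration (the STI dipole model and the DeepSTI network are not reproduced here), the sketch below runs the same iteration on a toy linear inverse problem, with classical soft-thresholding standing in for the learned proximal network; `A`, `b`, and all parameter values are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def proximal_gradient(A, b, prox, step, n_iter=500):
    """Iterate x_{k+1} = prox(x_k - step * A^T (A x_k - b)).

    In DeepSTI the prox is a trained deep network approximating the
    proximal operator of an STI regularizer; here a classical
    soft-thresholding prox (sparsity prior) stands in for illustration.
    """
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = prox(x - step * A.T @ (A @ x - b))
    return x

def soft_threshold(lam):
    """Proximal operator of lam * ||.||_1 (element-wise shrinkage)."""
    return lambda v: np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

rng = np.random.default_rng(0)
A = rng.standard_normal((80, 40)) / np.sqrt(80)   # toy forward operator
x_true = np.zeros(40)
x_true[[3, 17, 25]] = [1.0, -2.0, 1.5]            # sparse ground truth
b = A @ x_true                                    # noiseless measurements
step = 1.0 / np.linalg.norm(A, 2) ** 2            # safe step size
x_hat = proximal_gradient(A, b, soft_threshold(1e-5), step)
```

Swapping the hand-crafted `prox` for a learned network is exactly the plug-and-play idea the abstract describes.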
Variational autoencoders and Helmholtz machines use a recognition network (encoder) to approximate the posterior distribution of a generative model (decoder). In this paper we study the necessary and sufficient properties of a recognition network so that it can model the true posterior distribution exactly. These results are derived in the general context of probabilistic graphical modelling / Bayesian networks, for which the network represents a set of conditional independence statements. We derive both global conditions, in terms of d-separation, and local conditions for the recognition network to have the desired qualities. It turns out that for the local conditions the property perfectness (for every node, all parents are joined) plays an important role.
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
Deep learning-based pose estimation algorithms can successfully estimate the pose of objects in an image, especially in the field of color images. 6D object pose estimation methods for X-ray images based on deep learning models often use custom architectures that rely on extensive CAD models and simulated data for training. Recent RGB-based methods opt to solve pose estimation problems using small datasets, making them more attractive for the X-ray domain, where medical data is scarcely available. We refine an existing RGB-based model (SingleShotPose) to estimate the 6D pose of a marked cube from grayscale X-ray images by creating a generic solution trained on only real X-ray data and adjusted for X-ray acquisition geometry. The model regresses 2D control points and calculates the pose through 2D/3D correspondences using Perspective-n-Point (PnP), allowing a single trained model to be used across all supported cone-beam-based X-ray geometries. Since modern X-ray systems continuously adjust acquisition parameters during a procedure, it is essential for such a pose estimation network to consider these parameters in order to be deployed successfully and find a real use case. With a 5-cm/5-degree accuracy of 93% and an average 3D rotation error of 2.2 degrees, the results of the proposed approach are comparable with state-of-the-art alternatives, while requiring significantly fewer real training examples and being applicable in real-time applications.
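The 2D/3D-correspondence step can be illustrated with a minimal Direct Linear Transform (DLT) PnP solver in NumPy. This is a sketch of the general technique, not the solver used in the paper; the camera intrinsics, cube size, and pose below are invented values for the synthetic check.

```python
import numpy as np

def dlt_pnp(pts3d, pts2d, K):
    """Recover rotation R and translation t from >= 6 noise-free 2D/3D
    correspondences via the Direct Linear Transform."""
    rows = []
    for (X, Y, Z), (u, v) in zip(pts3d, pts2d):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    _, _, Vt = np.linalg.svd(np.asarray(rows, dtype=float))
    P = Vt[-1].reshape(3, 4)               # projection matrix, up to scale
    M = np.linalg.inv(K) @ P               # remove intrinsics -> s * [R | t]
    s = np.cbrt(np.linalg.det(M[:, :3]))   # recover scale (and sign)
    M /= s
    U, _, Vt2 = np.linalg.svd(M[:, :3])
    return U @ Vt2, M[:, 3]                # nearest rotation matrix, translation

# Synthetic check: project the corners of a marked cube (plus one extra
# marker to avoid symmetric degeneracies) under a known pose.
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
c, st = np.cos(0.3), np.sin(0.3)
R_true = np.array([[c, -st, 0.0], [st, c, 0.0], [0.0, 0.0, 1.0]])
t_true = np.array([0.1, -0.2, 5.0])
corners = np.array([[x, y, z] for x in (-0.5, 0.5)
                    for y in (-0.5, 0.5) for z in (-0.5, 0.5)])
pts3d = np.vstack([corners, [0.2, 0.1, 0.4]])
cam = pts3d @ R_true.T + t_true            # world -> camera
uvw = cam @ K.T
uv = uvw[:, :2] / uvw[:, 2:]               # perspective division -> pixels
R_est, t_est = dlt_pnp(pts3d, uv, K)
```

A production pipeline would instead call a robust PnP routine (e.g. an EPnP/RANSAC variant) on the regressed control points, but the recovered pose is the same quantity.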
With big data becoming increasingly available, IoT hardware becoming widely adopted, and AI capabilities becoming more powerful, organizations are continuously investing in sensing. Data coming from sensor networks are currently combined with sensor fusion and AI algorithms to drive innovation in fields such as self-driving cars. Data from these sensors can be utilized in numerous use cases, including alerts in safety systems of urban settings for events such as gunshots and explosions. Moreover, diverse types of sensors, such as sound sensors, can be utilized in low-light conditions or at locations where a camera is not available. This paper investigates the potential of utilizing sound-sensor data in an urban context. Technically, we propose a novel approach to classifying sound data using the Wigner-Ville distribution and convolutional neural networks. In this paper, we report on the performance of the approach on open-source datasets. The concept and work presented are based on my doctoral thesis, which was performed as part of the Engineering Doctorate program in Data Science at the University of Eindhoven, in collaboration with the Dutch National Police. Additional work on real-world datasets was performed during the thesis but is not presented here due to confidentiality.
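The time-frequency front end can be sketched directly: the discrete Wigner-Ville distribution of an analytic signal is the FFT, over the lag variable, of the instantaneous autocorrelation. The minimal NumPy version below is illustrative only (windowing and cross-term smoothing used in practice are omitted); the resulting 2-D map is what a CNN would consume as an image.

```python
import numpy as np

def wigner_ville(x):
    """Discrete Wigner-Ville distribution of an analytic signal.

    W[n, k] = FFT over lag tau of x[n+tau] * conj(x[n-tau]).  Note the
    frequency-doubling of the discrete WVD: a tone at normalized
    frequency f0 peaks at bin 2 * f0 * N.
    """
    x = np.asarray(x, dtype=complex)
    N = len(x)
    W = np.zeros((N, N))
    for n in range(N):
        tau_max = min(n, N - 1 - n)            # lags that stay in-bounds
        tau = np.arange(-tau_max, tau_max + 1)
        r = np.zeros(N, dtype=complex)
        r[tau % N] = x[n + tau] * np.conj(x[n - tau])
        W[n] = np.fft.fft(r).real              # Hermitian in tau -> real
    return W

# A pure tone at normalized frequency 0.125 concentrates at bin 16 (= 2*0.125*64).
N, f0 = 64, 0.125
tone = np.exp(2j * np.pi * f0 * np.arange(N))
W = wigner_ville(tone)
```

For real audio one would first take the analytic signal (e.g. via the Hilbert transform) before computing the distribution.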
To apply federated learning to drug discovery, we developed a novel platform in the context of the European Innovative Medicines Initiative (IMI) project MELLODDY (grant n°831472), which comprised 10 pharmaceutical companies, academic research labs, large industrial companies, and startups. The MELLODDY platform was the first industry-scale platform to enable the creation of a global federated model for drug discovery without sharing the confidential data sets of the individual partners. The federated model was trained on the platform by aggregating the gradients of all contributing partners in a cryptographic, secure way following each training iteration. The platform was deployed on an Amazon Web Services (AWS) multi-account architecture running Kubernetes clusters in private subnets. Organisationally, the roles of the different partners were codified as different rights and permissions on the platform and administrated in a decentralized way. The MELLODDY platform generated new scientific discoveries which are described in a companion paper.
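The abstract does not detail the cryptographic protocol, so the following is only a minimal, hypothetical illustration of the general idea behind secure gradient aggregation (not MELLODDY's actual implementation): partners exchange pairwise additive masks that cancel in the sum, so the aggregator sees only masked individual updates yet recovers the exact aggregate gradient.

```python
import numpy as np

def masked_updates(gradients, rng):
    """Pairwise additive masking: for each pair i < j, partner i adds a
    shared random mask m_ij and partner j subtracts it.  Individual
    updates are hidden, but the masks cancel in the aggregate."""
    n = len(gradients)
    masked = [g.astype(float).copy() for g in gradients]
    for i in range(n):
        for j in range(i + 1, n):
            m = rng.standard_normal(gradients[0].shape)  # shared mask m_ij
            masked[i] += m
            masked[j] -= m
    return masked

rng = np.random.default_rng(42)
grads = [rng.standard_normal(4) for _ in range(3)]   # one update per partner
masked = masked_updates(grads, rng)
aggregate = np.sum(masked, axis=0)                   # equals the true sum
```

Real deployments derive the pairwise masks from key agreement and add dropout recovery, but the cancellation principle is the same.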
Getting the most out of limited resources allows advances in natural language processing (NLP) research and practice while being conservative with resources. Those resources may be data, time, storage, or energy. Recent work in NLP has yielded interesting results from scaling; however, using only scale to improve results means that resource consumption also scales. That relationship motivates research into efficient methods that require fewer resources to achieve similar results. This survey covers methods and findings on efficiency in NLP, aiming to guide new researchers in the field and to inspire the development of new methods.
Melanoma is a serious form of skin cancer with a high mortality rate at later stages. Fortunately, when detected early, the prognosis of melanoma is promising, and the incidence rate of malignant melanoma is relatively low. As a result, datasets are heavily imbalanced, which complicates training current state-of-the-art supervised classification AI models. We propose to use generative models to learn the benign data distribution and to detect out-of-distribution (OOD) malignant images through density estimation. Normalizing flows (NFs) are ideal candidates for OOD detection due to their ability to compute exact likelihoods. However, their inductive biases towards apparent graphical features rather than semantic context hamper accurate OOD detection. In this work, we aim to use these biases, together with domain-level knowledge of melanoma, to improve likelihood-based OOD detection of malignant images. Our encouraging results demonstrate the potential of detecting melanoma with NFs. We achieve a 9% increase in the area under the receiver operating characteristic curve by using wavelet-based NFs. This model requires fewer parameters, making it more applicable on edge devices. The proposed method can help medical experts diagnose skin-cancer patients and continuously improve survival rates. Furthermore, this research paves the way for other areas of oncology with similar data-imbalance issues\footnote{Code available:
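The likelihood-based OOD decision rule itself is easy to sketch. Since a wavelet-based normalizing flow is beyond a snippet, a Gaussian density fitted to benign data stands in for the flow below; the feature dimensions, sample sizes, and class separation are invented for the synthetic check.

```python
import numpy as np

def fit_gaussian(X):
    """Fit a full-covariance Gaussian to in-distribution (benign) data."""
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False) + 1e-6 * np.eye(X.shape[1])
    return mu, cov

def log_likelihood(X, mu, cov):
    """Exact Gaussian log-density (the role an NF plays for images)."""
    d = X.shape[1]
    diff = X - mu
    inv = np.linalg.inv(cov)
    return -0.5 * (np.einsum('ij,jk,ik->i', diff, inv, diff)
                   + np.log(np.linalg.det(cov)) + d * np.log(2 * np.pi))

def auroc(scores_pos, scores_neg):
    """Probability a malignant (OOD) score exceeds a benign score."""
    return np.mean([sp > sn for sp in scores_pos for sn in scores_neg])

rng = np.random.default_rng(0)
benign_train = rng.normal(0.0, 1.0, size=(500, 2))
benign_test = rng.normal(0.0, 1.0, size=(100, 2))
malignant = rng.normal(3.0, 1.0, size=(100, 2))    # shifted -> low likelihood

mu, cov = fit_gaussian(benign_train)
ood_score = lambda X: -log_likelihood(X, mu, cov)  # low likelihood = OOD
score_auc = auroc(ood_score(malignant), ood_score(benign_test))
```

Replacing the Gaussian with a flow changes only how the exact log-likelihood is computed; the thresholding and AUROC evaluation are unchanged.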
Pancreatic cancer is one of the global leading causes of cancer-related deaths. Despite the success of deep learning in computer-aided diagnosis and detection (CAD) methods, little attention has been paid to the detection of pancreatic cancer. We propose a method for detecting pancreatic tumors that utilizes clinically significant features in the surrounding anatomy, thereby better exploiting the knowledge of radiologists compared to other, conventional deep learning approaches. To this end, we collected a new dataset consisting of 99 cases with pancreatic ductal adenocarcinoma (PDAC) and 97 control cases without any pancreatic tumor. Due to the growth pattern of pancreatic cancer, the tumor may not always be visible as a hypodense lesion; experts therefore refer to the visibility of secondary external features that may indicate the presence of a tumor. We propose a method based on a U-Net-like deep CNN that exploits the following external secondary features: the pancreatic duct, the common bile duct, and the pancreas, along with a processed CT scan. Using these features, the model segments the pancreatic tumor if it is present. This segmentation-for-classification-and-localization approach achieves a sensitivity of 99% (one case missed) and a specificity of 99%, a 5% increase in sensitivity over the previous state-of-the-art method. The model additionally provides location information with reasonable accuracy and a shorter inference time compared to previous PDAC detection methods. These results offer a significant performance improvement and highlight the importance of incorporating the knowledge of clinical experts when developing novel CAD methods.
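The segmentation-for-classification step described above reduces to a simple decision rule: a case is called positive when the predicted tumor mask contains enough foreground voxels. A minimal sketch, with an illustrative voxel threshold and toy masks rather than real CT output:

```python
import numpy as np

def classify_from_segmentation(pred_mask, min_voxels=10):
    """Case is 'tumor' when the predicted tumor mask contains at least
    min_voxels foreground voxels (threshold is illustrative)."""
    return int(np.count_nonzero(pred_mask) >= min_voxels)

def sensitivity_specificity(y_true, y_pred):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    return tp / (tp + fn), tn / (tn + fp)

# Toy predicted masks: one clear tumor, one empty, one with a few noisy voxels.
tumor = np.zeros((8, 8, 8)); tumor[2:5, 2:5, 2:5] = 1   # 27 voxels
empty = np.zeros((8, 8, 8))
noise = np.zeros((8, 8, 8)); noise[0, 0, :3] = 1        # 3 voxels, below threshold
preds = [classify_from_segmentation(m) for m in (tumor, empty, noise)]
sens, spec = sensitivity_specificity([1, 0, 0], preds)
```

The voxel threshold trades sensitivity against specificity and would be tuned on validation data in practice.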
We present the Caltech Fish Counting Dataset (CFC), a large-scale dataset for detecting, tracking, and counting fish in sonar videos. We identify sonar videos as a rich source of data for advancing low signal-to-noise computer vision applications and for tackling domain generalization in multiple-object tracking (MOT) and counting. In contrast to existing MOT and counting datasets, which are largely restricted to videos of people and vehicles in cities, CFC is sourced from a natural-world domain where targets are not easily resolvable and appearance features cannot be easily leveraged for target re-identification. CFC allows researchers to train MOT and counting algorithms and to evaluate generalization performance at unseen test locations. We perform extensive baseline experiments and identify key challenges and opportunities for advancing the state of the art in generalization in MOT and counting.